Trusted system
In the security engineering subspecialty of computer science, a trusted system is a system that is relied upon to a specified extent to enforce a specified security policy. Equivalently, a trusted system is one whose failure would break the security policy it is relied upon to enforce.
==Trusted systems in classified information==

A subset of trusted systems ("Division B" and "Division A") implements mandatory access control labels; as such, it is often assumed that they can be used for processing classified information. However, this is generally untrue. There are four modes in which one can operate a multilevel secure system (''viz.'', multilevel mode, compartmented mode, dedicated mode, and system-high mode) and, as specified by the National Computer Security Center's "Yellow Book," B3 and A1 systems can only be used for processing a strict subset of security labels, and only when operated according to a particularly strict configuration.
Central to the concept of U.S. Department of Defense-style "trusted systems" is the notion of a "reference monitor", which is an entity that occupies the logical heart of the system and is responsible for all access control decisions. Ideally, the reference monitor is (a) tamperproof, (b) always invoked, and (c) small enough to be subject to independent testing, the completeness of which can be assured. The U.S. National Security Agency's 1983 Trusted Computer System Evaluation Criteria (TCSEC), or "Orange Book", defined a set of "evaluation classes" describing the features and assurances that the user could expect from a trusted system.
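As a rough sketch only (not drawn from the TCSEC or any evaluated system), the mediation idea can be written down in a few lines: a single reference-monitor component through which every access request must pass, with the actual policy supplied by the TCB. The class name, method names, and the toy owner-based policy below are illustrative assumptions, not a description of any real implementation.
<syntaxhighlight lang="python">
class ReferenceMonitor:
    """The only component permitted to grant or deny access.  In a real
    trusted system this mediation point would be tamperproof, always
    invoked, and small enough to be analyzed exhaustively."""

    def __init__(self, policy):
        # policy: a function (subject, obj, mode) -> bool supplied by the TCB
        self._policy = policy

    def mediate(self, subject, obj, mode):
        # Every request by a subject for access to an object is funneled
        # through this one method; no other code path touches objects.
        if self._policy(subject, obj, mode):
            return True                      # access granted
        raise PermissionError(f"{subject} may not {mode} {obj}")

# Illustrative policy: anyone may read; only the recorded owner may write.
owners = {"report.txt": "alice"}
rm = ReferenceMonitor(lambda s, o, m: m == "read" or owners.get(o) == s)
rm.mediate("alice", "report.txt", "write")     # allowed
# rm.mediate("bob", "report.txt", "write")     # would raise PermissionError
</syntaxhighlight>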
Key to the provision of the highest levels of assurance (B3 and A1) is the dedication of significant system engineering toward minimization of the complexity (not ''size'', as often cited) of the trusted computing base (TCB), defined as that combination of hardware, software, and firmware that is responsible for enforcing the system's security policy.
An inherent engineering conflict would appear to arise in higher-assurance systems in that the smaller the TCB, the larger the set of hardware, software, and firmware that lies outside the TCB and is, therefore, untrusted. Although this may lead the more technically naive toward sophistic arguments about the nature of trust, such arguments confuse the issue of "correctness" with that of "trustworthiness."
In contrast to the TCSEC's precisely defined hierarchy of six evaluation classes—the highest of which, A1, is featurally identical to B3, differing only in documentation standards—the more recently introduced Common Criteria (CC)—which derive from a blend of more or less technically mature standards from various NATO countries—provide a more tenuous spectrum of seven "evaluation assurance levels" (EALs) that intermix features and assurances in an arguably non-hierarchical manner and lack the philosophic precision and mathematical stricture of the TCSEC. In particular, the CC tolerate very loose identification of the "target of evaluation" (TOE) and support—even encourage—an intermixture of security requirements culled from a variety of predefined "protection profiles." While a strong case can be made that even the more seemingly arbitrary components of the TCSEC contribute to a "chain of evidence" that a fielded system properly enforces its advertised security policy, not even the highest (EAL7) level of the CC can truly provide analogous consistency and stricture of evidentiary reasoning.
The mathematical notions of trusted systems for the protection of classified information derive from two independent but interrelated corpora of work. In 1974, David Bell and Leonard LaPadula of MITRE, working under the close technical guidance and economic sponsorship of Maj. Roger Schell, Ph.D., of the U.S. Air Force Electronic Systems Division (Hanscom AFB, MA), devised what is known as the Bell-LaPadula model, in which a more or less trustworthy computer system is modeled in terms of objects (passive repositories or destinations for data, such as files, disks, and printers) and subjects (active entities—perhaps users, or system processes or threads operating on behalf of those users—that cause information to flow among objects). The entire operation of a computer system can indeed be regarded as a "history" (in the serializability-theoretic sense) of pieces of information flowing from object to object in response to subjects' requests for such flows.
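A minimal sketch of this modeling vocabulary, with illustrative names only: objects and subjects as plain records, and the system's operation viewed abstractly as a serialized history of flow events.
<syntaxhighlight lang="python">
from dataclasses import dataclass

@dataclass(frozen=True)
class Object:            # passive repository or destination for data
    name: str            # e.g. a file, a disk, a printer

@dataclass(frozen=True)
class Subject:           # active entity (a process or thread) acting for a user
    name: str
    user: str

# The operation of the whole system, viewed abstractly, is a serialized
# history of information flows caused by subjects' requests.
history: list[tuple[Subject, Object, Object]] = []

def record_flow(subject: Subject, source: Object, destination: Object) -> None:
    # Each granted request appends one flow event to the history.
    history.append((subject, source, destination))

record_flow(Subject("editor", user="alice"), Object("draft.txt"), Object("printer0"))
</syntaxhighlight>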
At the same time, Dorothy Denning at Purdue University was publishing her Ph.D. dissertation, which dealt with "lattice-based information flows" in computer systems. (A mathematical "lattice" is a partially ordered set, characterizable as a directed acyclic graph, in which the relationship between any two vertices is either "dominates," "is dominated by," or neither.) She defined a generalized notion of "labels"—corresponding more or less to the full security markings one encounters on classified military documents, ''e.g.'', TOP SECRET WNINTEL TK DUMBO—that are attached to entities. Bell and LaPadula integrated Denning's concept into their landmark MITRE technical report—entitled ''Secure Computer System: Unified Exposition and Multics Interpretation''—whereby labels attached to objects represented the sensitivity of the data contained within the object (though there can be, and often is, a subtle semantic difference between the sensitivity of the data within the object and the sensitivity of the object itself), while labels attached to subjects represented the trustworthiness of the user executing the subject. The concepts are unified by two properties: the "simple security property" (a subject can only read from an object that it ''dominates''; ''greater than'' is a close enough—albeit mathematically imprecise—interpretation) and the "confinement property," or "*-property" (a subject can only write to an object that dominates it). (These properties are loosely referred to as "no-read-up" and "no-write-down," respectively.) Jointly enforced, these properties ensure that information cannot flow "downhill" to a repository whence insufficiently trustworthy recipients might discover it. By extension, assuming that the labels assigned to subjects are truly representative of their trustworthiness, the no-read-up and no-write-down rules rigidly enforced by the reference monitor are provably sufficient to constrain Trojan horses, one of the most general classes of attack (''scil.'', the popularly reported worms and viruses are specializations of the Trojan horse concept).
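A minimal sketch of the two rules, assuming the usual lattice construction of a hierarchical level plus a set of compartments; the names Label, dominates, can_read, and can_write are illustrative, not the model's formal notation.
<syntaxhighlight lang="python">
# Sketch of Bell-LaPadula labels and the two properties described above.
# A label is a hierarchical level plus a set of compartments; label A
# dominates label B when A's level is at least B's and A's compartments
# include all of B's.  Levels and compartment names are illustrative.
from dataclasses import dataclass

LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

@dataclass(frozen=True)
class Label:
    level: str
    compartments: frozenset = frozenset()

def dominates(a: Label, b: Label) -> bool:
    """Partial order on labels: the lattice's 'dominates' relation."""
    return (LEVELS[a.level] >= LEVELS[b.level]
            and a.compartments >= b.compartments)

def can_read(subject: Label, obj: Label) -> bool:
    # Simple security property ("no read up"): the subject must dominate the object.
    return dominates(subject, obj)

def can_write(subject: Label, obj: Label) -> bool:
    # *-property / confinement property ("no write down"): the object must dominate the subject.
    return dominates(obj, subject)

analyst = Label("SECRET", frozenset({"TK"}))
memo    = Label("CONFIDENTIAL")
archive = Label("TOP SECRET", frozenset({"TK", "WNINTEL"}))

assert can_read(analyst, memo) and not can_write(analyst, memo)        # read down ok, write down blocked
assert can_write(analyst, archive) and not can_read(analyst, archive)  # write up ok, read up blocked
</syntaxhighlight>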
The Bell-LaPadula model technically enforces only "confidentiality," or "secrecy," controls, ''i.e.'', it addresses the problem of the sensitivity of objects and the attendant trustworthiness of subjects not to disclose them inappropriately. The dual problem of "integrity," ''i.e.'', the problem of the accuracy (even the provenance) of objects and the attendant trustworthiness of subjects not to modify or destroy them inappropriately, is addressed by mathematically affine models, the most important of which is named for its creator, K. J. Biba. Other integrity models include the Clark-Wilson model and Shockley and Schell's program integrity model.
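For illustration, the duality can be seen by running the same dominance check in the opposite direction. The sketch below reuses the illustrative Label and dominates() from the previous sketch and shows only Biba's strict-integrity rules ("no read down," "no write up"), not the Clark-Wilson or program-integrity models.
<syntaxhighlight lang="python">
# Sketch of the Biba strict-integrity rules, the dual of Bell-LaPadula:
# here the labels express integrity (trustworthiness of the data), and the
# read/write checks run in the opposite direction.

def biba_can_read(subject: Label, obj: Label) -> bool:
    # Simple integrity property ("no read down"): a subject may only read
    # objects whose integrity dominates its own, so low-integrity data
    # cannot contaminate a high-integrity subject.
    return dominates(obj, subject)

def biba_can_write(subject: Label, obj: Label) -> bool:
    # Integrity *-property ("no write up"): a subject may only write objects
    # it dominates, so it cannot corrupt data more trustworthy than itself.
    return dominates(subject, obj)
</syntaxhighlight>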
An important feature of the class of security controls described ''supra'', termed mandatory access controls, or MAC, is that they are entirely beyond the control of any user: the TCB automatically attaches labels to any subjects executed on behalf of users; to files created, deleted, read, or written by users; and so forth. In contrast, an additional class of controls, termed discretionary access controls (DAC), is under the direct control of system users. Protection mechanisms such as permission bits (supported by UNIX since the late 1960s and—in a more flexible and powerful form—by Multics since earlier still) and access control lists (ACLs) are familiar examples of discretionary access controls.
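As a hedged illustration of how the two classes of control compose, the sketch below layers a UNIX-style discretionary permission-bit check under the mandatory label check from the earlier sketch. The function names and the owner/other simplification are assumptions made for brevity, not the behavior of any particular system.
<syntaxhighlight lang="python">
# Illustrative contrast between discretionary and mandatory controls.
# The DAC part uses UNIX-style permission bits (owner-settable); the MAC
# part reuses the illustrative Label and can_read() defined above.
import stat

def dac_allows_read(uid: str, file_owner: str, mode_bits: int) -> bool:
    # Discretionary: the owner chose these bits and may change them at will.
    bit = stat.S_IRUSR if uid == file_owner else stat.S_IROTH
    return bool(mode_bits & bit)

def tcb_allows_read(uid, subject_label, file_owner, mode_bits, object_label) -> bool:
    # Mandatory check first: the label is attached by the TCB and cannot be
    # waived by any user; the discretionary check may only further restrict.
    return can_read(subject_label, object_label) and \
           dac_allows_read(uid, file_owner, mode_bits)

# e.g. a file with mode 0o640 owned by "alice", labeled CONFIDENTIAL:
tcb_allows_read("alice", Label("SECRET"), "alice", 0o640, Label("CONFIDENTIAL"))  # True
</syntaxhighlight>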
The behavior of a trusted system is often characterized in terms of a mathematical model—which may be more or less rigorous, depending upon applicable operational and administrative constraints—that takes the form of a finite state machine (FSM) with state criteria; state transition constraints; a set of "operations" that correspond to state transitions (usually, but not necessarily, one operation per transition); and a descriptive top-level specification, or DTLS, entailing a user-perceptible interface (''e.g.'', an API, a set of system calls (UNIX parlance) or system exits (mainframe parlance)), each element of which engenders one or more model operations.
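A minimal sketch of such a state-machine model, under the simplifying assumption of a single invariant and a single kind of operation; the states, the operation name, and the invariant are illustrative and reuse the Label and can_read() definitions from the sketches above, not any actual DTLS.
<syntaxhighlight lang="python">
# Minimal sketch of a finite-state-machine security model: each operation is
# a state transition, and a transition is admitted only if the successor
# state still satisfies the security invariant.

def secure(state: dict) -> bool:
    # Single illustrative invariant: every granted read obeys can_read.
    return all(can_read(subj, obj)
               for (subj, obj, mode) in state["accesses"] if mode == "read")

def transition(state: dict, operation) -> dict:
    """Apply one model operation; refuse it if the successor state would
    violate the invariant (the transition-constraint part of the model)."""
    successor = operation(state)
    if not secure(successor):
        raise PermissionError("transition would violate the security invariant")
    return successor

def open_for_read(subject_label, object_label):
    # One model operation, engendered e.g. by an open-for-read system call
    # at the DTLS interface.
    def op(state):
        return {"accesses": state["accesses"] | {(subject_label, object_label, "read")}}
    return op

state = {"accesses": frozenset()}
state = transition(state, open_for_read(Label("SECRET"), Label("CONFIDENTIAL")))   # admitted
# transition(state, open_for_read(Label("SECRET"), Label("TOP SECRET")))           # rejected
</syntaxhighlight>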

Source of the excerpt: Wikipedia, the free encyclopedia. The full "Trusted system" article can be read on Wikipedia.


